
    Explanation for case-based reasoning via abstract argumentation

    Case-based reasoning (CBR) is extensively used in AI in support of several applications, to assess a new situation (or case) by recollecting past situations (or cases) and employing the ones most similar to the new situation to give the assessment. In this paper we study properties of a recently proposed method for CBR, based on instantiated Abstract Argumentation and referred to as AA-CBR, for problems where cases are represented by abstract factors and (positive or negative) outcomes, and an outcome for a new case, represented by abstract factors, needs to be established. In addition, we study properties of explanations in AA-CBR and define a new notion of lean explanations that utilize solely relevant cases. Both forms of explanations can be seen as dialogical processes between a proponent and an opponent, with the burden of proof falling on the proponent.
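
    The paper's full construction is given therein; purely as an illustration, here is a minimal Python sketch of an AA-CBR-style pipeline under simplifying assumptions (binary "+"/"-" outcomes, cases as frozensets of abstract factors, grounded semantics), with all function names our own.

    ```python
    def attacks(a, b, casebase):
        """(X, ox) attacks (Y, oy) iff the outcomes differ, Y is strictly
        more general than X, and no case with X's outcome sits strictly
        between them (a 'concise' attack, in the spirit of AA-CBR)."""
        (X, ox), (Y, oy) = a, b
        if ox == oy or not (Y < X):
            return False
        return not any(oz == ox and Y < Z < X for (Z, oz) in casebase)

    def grounded(arguments, attack_pairs):
        """Least fixed point: repeatedly accept unattacked arguments and
        discard everything they attack."""
        live, accepted = set(arguments), set()
        while True:
            unattacked = {a for a in live
                          if not any((b, a) in attack_pairs for b in live)}
            if unattacked <= accepted:
                return accepted
            accepted |= unattacked
            live -= {a for a in live
                     if any((b, a) in attack_pairs for b in unattacked)}

    def outcome_for(new_case, casebase, default="-"):
        default_arg = (frozenset(), default)
        args = set(casebase) | {default_arg}
        atts = {(a, b) for a in args for b in args if attacks(a, b, casebase)}
        # The new case attacks every past case citing factors it lacks.
        newcase_arg = (frozenset(new_case), None)
        atts |= {(newcase_arg, (Y, oy)) for (Y, oy) in args
                 if not Y <= frozenset(new_case)}
        g = grounded(args | {newcase_arg}, atts)
        return default if default_arg in g else ("+" if default == "-" else "-")

    cb = [(frozenset({"a"}), "+"), (frozenset({"a", "b"}), "-")]
    print(outcome_for({"a"}, cb))        # "+": the {a} case defeats the default
    print(outcome_for({"a", "b"}, cb))   # "-": the more specific case prevails
    ```

    The grounded computation mirrors the proponent/opponent dialogue described in the abstract: unattacked cases are accepted first, and every case they defeat is discarded.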

    Resolving conflicts in clinical guidelines using argumentation

    Automatically reasoning with conflicting generic clinical guidelines is a burning issue in patient-centric medical reasoning where patient-specific conditions and goals need to be taken into account. It is even more challenging in the presence of preferences such as the patient's wishes and the clinician's priorities over goals. We advance a structured argumentation formalism for reasoning with conflicting clinical guidelines, patient-specific information and preferences. Our formalism integrates assumption-based reasoning and goal-driven selection among reasoning outcomes. Specifically, we assume applicability of guideline recommendations concerning the generic goal of patient well-being, resolve conflicts among recommendations using the patient's conditions and preferences, and then consider prioritised patient-centered goals to yield non-conflicting, goal-maximising and preference-respecting recommendations. We rely on the state-of-the-art Transition-based Medical Recommendation model for representing guideline recommendations and augment it with context given by the patient's conditions, goals, as well as preferences over recommendations and goals. We establish desirable properties of our approach in terms of sensitivity to recommendation conflicts and patient context.
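
    The paper's formalism is built on assumption-based argumentation over the Transition-based Medical Recommendation model; purely as a toy illustration of preference-respecting conflict resolution, the sketch below greedily keeps, among conflicting recommendations, the ones promoting higher-priority goals. The Recommendation structure, the conflict table and the greedy rule are our own simplifications, not the paper's machinery.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Recommendation:
        name: str
        goals: frozenset          # patient-centred goals this action promotes

    def select(recs, conflicts, goal_priority):
        """Greedily keep the recommendation promoting higher-priority goals
        whenever two recommendations conflict; a crude stand-in for the
        paper's argumentation-based conflict resolution."""
        def score(r):
            return sum(goal_priority.get(g, 0) for g in r.goals)
        chosen = []
        for r in sorted(recs, key=score, reverse=True):
            if all((r.name, c.name) not in conflicts
                   and (c.name, r.name) not in conflicts for c in chosen):
                chosen.append(r)
        return chosen

    aspirin   = Recommendation("aspirin", frozenset({"prevent_clots"}))
    ibuprofen = Recommendation("ibuprofen", frozenset({"reduce_pain"}))
    conflicts = {("aspirin", "ibuprofen")}   # e.g. a known interaction
    priority  = {"prevent_clots": 2, "reduce_pain": 1}
    print([r.name for r in select([aspirin, ibuprofen], conflicts, priority)])
    # ['aspirin']: the conflict-free, goal-maximising choice
    ```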

    Interpretability of Gradual Semantics in Abstract Argumentation

    Argumentation, in the field of Artificial Intelligence, is a formalism for reasoning with contradictory information as well as for modelling an exchange of arguments between one or several agents. For this purpose, many semantics have been defined with, amongst them, gradual semantics aiming to assign an acceptability degree to each argument. Although the number of these semantics continues to increase, there is currently no method for explaining the results returned by these semantics. In this paper, we study the interpretability of these semantics by measuring, for each argument, the impact of the other arguments on its acceptability degree. We define a new property and show that the score of an argument returned by a gradual semantics which satisfies this property can also be computed by aggregating the impact of the other arguments on it. This result allows us to provide, for each argument in an argumentation framework, a ranking between arguments from the most to the least impacting ones w.r.t. a given gradual semantics.
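
    The abstract does not commit to a particular gradual semantics; the sketch below uses the well-known h-categorizer semantics, where deg(a) = 1 / (1 + the sum of the degrees of a's attackers), together with one plausible reading of "impact", namely the change in an argument's degree when another argument is removed. Both choices are our assumptions for illustration.

    ```python
    def hcat_degrees(args, attacks, iters=200):
        """Fixed-point computation of the h-categorizer gradual semantics:
        deg(a) = 1 / (1 + sum of the degrees of a's attackers)."""
        deg = {a: 1.0 for a in args}
        for _ in range(iters):
            deg = {a: 1.0 / (1.0 + sum(deg[b] for (b, t) in attacks if t == a))
                   for a in args}
        return deg

    def impact(b, a, args, attacks):
        """One simple reading of 'impact': how much a's acceptability
        degree changes when b is removed from the framework."""
        base = hcat_degrees(args, attacks)[a]
        rest = [x for x in args if x != b]
        ratts = {(s, t) for (s, t) in attacks if b not in (s, t)}
        return hcat_degrees(rest, ratts)[a] - base

    args = {"a", "b", "c"}
    atts = {("b", "a"), ("c", "b")}        # c attacks b, b attacks a
    deg = hcat_degrees(args, atts)
    print({k: round(v, 3) for k, v in sorted(deg.items())})
    # {'a': 0.667, 'b': 0.5, 'c': 1.0}
    print(round(impact("c", "a", args, atts), 3))
    # -0.167: removing the defender c lowers a's degree
    ```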

    Recent advances and perspectives on starch nanocomposites for packaging applications

    Starch nanocomposites are popular and abundant materials in the packaging sector. The aim of this work is to review some of the most popular starch nanocomposite systems in use today. Due to the wide range of applicable reinforcements, nanocomposite systems are surveyed by nanofiller type, such as nanoclays, polysaccharides and carbonaceous nanofillers. Furthermore, the structures of starch and the preparation methods for its nanocomposites are also discussed in this review. It is clearly shown that the mechanical, thermal and barrier properties of plasticised starch can be improved with well-dispersed nanofillers in starch nanocomposites.

    Argumentation-based reasoning with preferences

    One of the main objectives of AI is modelling human reasoning. Since preference information is an indispensable component of common-sense reasoning, the two should be studied in tandem. Argumentation is an established branch of AI dedicated to this task. In this paper, we study how argumentation with preferences models human intuition behind a particular decision-making scenario concerning reasoning with rules and preferences. To this end, we present an example of a common-sense reasoning problem complemented with a survey of decisions made by human respondents. The survey reveals an answer that contrasts with solutions offered by various argumentation formalisms. We argue that our results call for advancements in approaches to argumentation with preferences as well as for examination of the type of problems of reasoning with preferences put forward in this paper. Our work contributes to the line of research on preference handling in argumentation, and it also enriches the discussions on the increasingly important topic of preference treatment in AI at large.

    Complexity results and algorithms for bipolar argumentation

    Bipolar Argumentation Frameworks (BAFs) admit several interpretations of the support relation and diverging definitions of semantics. Recently, several classes of BAFs have been captured as instances of bipolar Assumption-Based Argumentation, a class of Assumption-Based Argumentation (ABA). In this paper, we establish the complexity of bipolar ABA, and consequently of several classes of BAFs. In addition to the standard five complexity problems, we analyse the rarely addressed extension enumeration problem too. We also advance backtracking-driven algorithms for enumerating extensions of bipolar ABA frameworks, and consequently of BAFs under several interpretations. We prove soundness and completeness of our algorithms, describe their implementation and provide a scalability evaluation. We thus contribute to the study of the as yet uninvestigated complexity problems of (variously interpreted) BAFs as well as of bipolar ABA, and provide the previously missing implementations thereof.
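
    The paper's algorithms operate on bipolar ABA frameworks; as a loose illustration of the backtracking flavour only, here is a simplified enumerator for stable extensions of a plain abstract argumentation framework (our own simplification, not the paper's procedure).

    ```python
    def stable_extensions(args, attacks):
        """Backtracking enumeration of stable extensions: label each
        argument IN or OUT, prune branches where IN arguments conflict,
        and keep labellings where every OUT argument is attacked by IN."""
        args = sorted(args)
        results = []

        def conflict(a, inside):
            return (a, a) in attacks or any(
                (a, b) in attacks or (b, a) in attacks for b in inside)

        def extend(i, inside):
            if i == len(args):
                outside = set(args) - inside
                if all(any((a, o) in attacks for a in inside) for o in outside):
                    results.append(frozenset(inside))
                return
            a = args[i]
            if not conflict(a, inside):            # branch 1: a is IN
                extend(i + 1, inside | {a})
            extend(i + 1, inside)                  # branch 2: a is OUT

        extend(0, set())
        return results

    # a and b attack each other and both attack c: two stable extensions.
    atts = {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")}
    print(stable_extensions({"a", "b", "c"}, atts))
    # [frozenset({'a'}), frozenset({'b'})]
    ```

    Pruning as soon as the partial IN set contains a conflict is what keeps such enumeration below the full 2^n labellings on most inputs.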

    Explanatory predictions with artificial neural networks and argumentation

    Data-centric AI has proven successful in several domains, but its outputs are often hard to explain. We present an architecture combining Artificial Neural Networks (ANNs) for feature selection and an instance of Abstract Argumentation (AA) for reasoning to provide effective predictions, explainable both dialectically and logically. In particular, we train an autoencoder to rank features in input examples, and select highest-ranked features to generate an AA framework that can be used for making and explaining predictions as well as mapped onto logical rules, which can equivalently be used for making predictions and for explaining. We show empirically that our method significantly outperforms ANNs and a decision-tree-based method from which logical rules can also be extracted.
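
    The abstract does not spell out the ranking criterion, so the sketch below makes one up for illustration: a small linear autoencoder is trained on the data, and the features of an example are ranked by the magnitude of their reconstruction. The toy data and all names are ours.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def train_autoencoder(X, hidden=2, lr=0.1, epochs=500):
        """Tiny linear autoencoder trained with batch gradient descent
        on mean squared reconstruction error."""
        n, d = X.shape
        W1 = rng.normal(0, 0.1, (d, hidden))
        W2 = rng.normal(0, 0.1, (hidden, d))
        for _ in range(epochs):
            H = X @ W1                  # encode
            R = H @ W2                  # decode
            G = 2 * (R - X) / n         # gradient of the loss w.r.t. R
            gW1 = X.T @ (G @ W2.T)
            gW2 = H.T @ G
            W1 -= lr * gW1
            W2 -= lr * gW2
        return W1, W2

    def rank_features(x, W1, W2):
        """Rank the features of one example by how strongly the
        autoencoder reconstructs them (our assumed criterion)."""
        r = (x @ W1) @ W2
        return np.argsort(-np.abs(r))   # indices, highest-ranked first

    # Toy data: features 0 and 1 carry structure, features 2 and 3 are noise.
    Z = rng.normal(size=(200, 1))
    X = np.hstack([Z, -Z, 0.05 * rng.normal(size=(200, 2))])
    W1, W2 = train_autoencoder(X)
    print(rank_features(X[0], W1, W2)[:2])   # typically the structured 0 and 1
    ```

    The highest-ranked features would then be used to generate the AA framework described in the abstract.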

    Rational versus Intuitive Outcomes of Reasoning with Preferences: Argumentation Perspective

    Reasoning with preference information is a common human activity. As modelling human reasoning is one of the main objectives of AI, reasoning with preferences is an important topic in various fields of AI, such as Knowledge Representation and Reasoning (KR). Argumentation is one particular branch of KR that concerns, among other tasks, modelling common-sense reasoning with preferences. A key issue there is the lack of consensus on how to deal with preferences. Witnessing this is a multitude of proposals on how to formalise reasoning with preferences in argumentative terms. As a commonality, however, formalisms of argumentation with preferences tend to fulfil various criteria of "rational" reasoning, notwithstanding the fact that human reasoning is often not "rational", yet seemingly "intuitive". In this paper, we study how several formalisms of argumentation with preferences model human intuition behind a particular common-sense reasoning problem. More specifically, we present a common-sense scenario of reasoning with rules and preferences, complemented with a survey of decisions made by human respondents that indicates an "intuitive" solution, and analyse how this problem is tackled in argumentation. We conclude that most approaches to argumentation with preferences afford a "rational" solution to the problem, and discuss one recent formalism that yields the "intuitive" solution instead. We argue that our results call for advancements in the area of argumentation with preferences in particular, as well as for further studies of reasoning with preferences in AI at large.

    Data-empowered argumentation for dialectically explainable predictions

    Today’s AI landscape is permeated by plentiful data and dominated by powerful data-centric methods with the potential to impact a wide range of human sectors. Yet, in some settings this potential is hindered by these data-centric AI methods being mostly opaque. Considerable efforts are currently being devoted to defining methods for explaining black-box techniques in some settings, while the use of transparent methods is being advocated in others, especially when high-stake decisions are involved, as in healthcare and the practice of law. In this paper we advocate a novel transparent paradigm of Data-Empowered Argumentation (DEAr in short) for dialectically explainable predictions. DEAr relies upon the extraction of argumentation debates from data, so that the dialectical outcomes of these debates amount to predictions (e.g. classifications) that can be explained dialectically. The argumentation debates consist of (data) arguments which may not be linguistic in general but may nonetheless be deemed to be ‘arguments’ in that they are dialectically related, for instance by disagreeing on data labels. We illustrate and experiment with the DEAr paradigm in three settings, making use, respectively, of categorical data, (annotated) images and text. We show empirically that DEAr is competitive with another transparent model, namely decision trees (DTs), while also providing naturally dialectical explanations.
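
    As a toy rendering of the idea (our own simplification, not the paper's construction for any of its three settings), the sketch below treats labelled data points as arguments, where each point attacks farther points carrying a different label; the prediction is the nearest case's label, and the chain of successively farther disagreeing cases serves as a dialectical explanation.

    ```python
    import numpy as np

    def debate(train_X, train_y, x):
        """Data points are arguments; a point attacks any farther point
        with a different label. The nearest point survives the debate,
        and the chain of successively farther disagreeing points forms
        a dialectical explanation of the prediction."""
        order = np.argsort(np.linalg.norm(train_X - x, axis=1))
        chain = []
        for i in order:                        # nearest first
            if not chain or train_y[i] != chain[-1][1]:
                chain.append((int(i), str(train_y[i])))
        return chain[0][1], chain              # chain[0] defeats chain[1], ...

    X = np.array([[0.0], [1.0], [3.0]])
    y = np.array(["pos", "neg", "pos"])
    pred, chain = debate(X, y, np.array([0.9]))
    print(pred)     # 'neg': the nearest case wins the debate
    print(chain)    # [(1, 'neg'), (0, 'pos')]: each case defeats the next
    ```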